1 Introduction

We’ll use the Credit data frame from the ISLR package to demonstrate multiple regression with:

  1. A numerical outcome variable \(y\), in this case credit card balance.

  2. Two explanatory variables:

    • A first numerical explanatory variable \(x_1\). In this case, their credit limit.

    • A second numerical explanatory variable \(x_2\). In this case, their income (in thousands of dollars).

Note: This dataset is not based on actual individuals; it is a simulated dataset used for educational purposes.

2 Exploratory data analysis

Let’s load the Credit data and look at the raw values:

# Write your code here
#
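A sketch of one way to load the data and inspect the raw values, assuming the ISLR and dplyr packages are installed:

```r
# Load the packages and the simulated Credit data
library(ISLR)
library(dplyr)

data("Credit")

# Look at the raw values
glimpse(Credit)
```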

Let’s look at some summary statistics for the variables that we need for the problem at hand.

# Write your code here
#
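A sketch using the skimr package (base R’s summary() works as well):

```r
library(ISLR)
library(dplyr)
library(skimr)

# Summary statistics for the outcome and the two explanatory variables
Credit %>%
  select(Balance, Limit, Income) %>%
  skim()
```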

Let’s also look at histograms of these variables as visual aids.
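For example, a histogram of the outcome variable (a sketch; the binwidth of 100 is my choice, not prescribed by the text):

```r
library(ISLR)
library(ggplot2)

# Histogram of credit card balances
ggplot(Credit, aes(x = Balance)) +
  geom_histogram(binwidth = 100, color = "white") +
  labs(x = "Credit card balance (in $)")
```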

We observe for example:

Since our outcome variable Balance and the explanatory variables Limit and Income are all numerical, we can (and should) compute the correlation coefficient between each pair of these variables before we proceed to build a model.

# Write the code to find the correlations:
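A sketch of the correlation computation:

```r
library(ISLR)
library(dplyr)

# Correlation matrix for all three pairs of variables
Credit %>%
  select(Balance, Limit, Income) %>%
  cor()
```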

Note: Collinearity (or multicollinearity) is a phenomenon in which one explanatory variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. In this case, since Limit and Income are highly correlated, if we knew someone’s credit card Limit we could make a fairly accurate guess as to that person’s Income. Put loosely, these two variables provide redundant information. For now let’s ignore any issues related to collinearity and press on.

Let’s visualize the relationship of the outcome variable with each of the two explanatory variables in two separate plots:
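A sketch of the two scatterplots:

```r
library(ISLR)
library(ggplot2)

# Balance vs. credit limit
ggplot(Credit, aes(x = Limit, y = Balance)) +
  geom_point() +
  labs(x = "Credit limit (in $)", y = "Credit card balance (in $)")

# Balance vs. income
ggplot(Credit, aes(x = Income, y = Balance)) +
  geom_point() +
  labs(x = "Income (in $1000)", y = "Credit card balance (in $)")
```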

To get a sense of the joint relationship of all three variables simultaneously through a visualization, let’s display the data in a 3-dimensional (3D) scatterplot, where

  1. The numerical outcome variable \(y\) Balance is on the z-axis (vertical axis)

  2. The two numerical explanatory variables form the “floor” axes. In this case

    • The first numerical explanatory variable \(x_1\), Income, is on one of the floor axes.

    • The second numerical explanatory variable \(x_2\), Limit, is on the other floor axis.
# draw 3D scatterplot
library(plotly)

p <- plot_ly(data = Credit, z = ~Balance, x = ~Income, y = ~Limit,
             opacity = 0.6, color = ~Balance) %>%
  add_markers()
p

Exercise

Conduct a new exploratory data analysis with the same outcome variable \(y\) being Balance but with Rating and Age as the new explanatory variables \(x_1\) and \(x_2\). Remember, this involves three things:

  1. Looking at the raw values
  2. Computing summary statistics of the variables of interest.
  3. Creating informative visualizations

What can you say about the relationship between a credit card holder’s balance and their credit rating and age?

2.1 Multiple regression

We now use a + in the model formula to include multiple explanatory variables. Here is the syntax:

model_name <- lm(y ~ x1 + x2 + ... + xn, data = data_frame_name)
Balance_model <- lm(Balance ~ Limit + Income, data = Credit)
Balance_model

Call:
lm(formula = Balance ~ Limit + Income, data = Credit)

Coefficients:
(Intercept)        Limit       Income  
  -385.1793       0.2643      -7.6633  
# Or use one of the following to see more info...

get_regression_table(Balance_model)
# A tibble: 3 x 7
  term      estimate std_error statistic p_value lower_ci upper_ci
  <chr>        <dbl>     <dbl>     <dbl>   <dbl>    <dbl>    <dbl>
1 intercept -385.       19.5       -19.8       0 -423.    -347.   
2 Limit        0.264     0.006      45.0       0    0.253    0.276
3 Income      -7.66      0.385     -19.9       0   -8.42    -6.91 
#summary(Balance_model)

Write your model equation here:

    * 
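For reference, plugging the estimates from the regression table above into \(\hat{y} = b_0 + b_1 \cdot x_1 + b_2 \cdot x_2\) gives:

\[\widehat{Balance} = -385.179 + 0.264 \cdot Limit - 7.663 \cdot Income\]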

How do we interpret these three values that define the regression plane?

  • Intercept: -$385.18 (rounded to two decimal places to represent cents). The intercept in our case represents the credit card balance for an individual who has both a credit Limit of $0 and an Income of $0.

    • In our data however, the intercept has limited (or no) practical interpretation as ….
  • Limit: $0.26. Now that we have multiple variables to consider, we have to add a caveat to our interpretation: holding all the other variables fixed (Income, in this case), for every increase of one dollar in credit Limit, there is an associated increase of, on average, $0.26 in credit card balance.

  • Income: -$7.66. Similarly, holding all the other variables fixed (Limit, in this case), for every increase of one unit in Income (in other words, $1,000 in income), there is an associated decrease of, on average, $7.66 in credit card balance.

WAIT! Did something go wrong? The interpretation of the Income coefficient is alarming.

Recall that in the individual scatterplots, both Limit and Income had positive relationships with the outcome variable Balance: as card holders’ credit limits increased, their credit card balances tended to increase as well, and a similar relationship held for incomes and balances. In the multiple regression above, however, the slope for Income is -7.66, suggesting a negative relationship between income and credit card balance. What explains these contradictory results?

This is known as **Simpson’s Paradox**, a phenomenon in which a trend appears in several different groups of data but disappears or reverses when these groups are combined. 

2.2 Observed/fitted values and residuals

As we did previously in Chapter 5, let’s look at the fitted values and residuals.

# get the following table
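A sketch using get_regression_points() from the moderndive package (which the document already uses for get_regression_table()):

```r
library(ISLR)
library(moderndive)

Balance_model <- lm(Balance ~ Limit + Income, data = Credit)

# Observed values, fitted values, and residuals for each card holder
get_regression_points(Balance_model)
```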

2.2.1 Diagnostics (Residual plot)

ggplot(Balance_model, aes(x = .fitted, y = .resid)) +
  geom_point() +
  geom_hline(yintercept = 0, linetype = "dashed") +
  labs(x = "Fitted values", y = "Residuals")

2.2.2 Let’s use the model to make predictions (Pretending that the model is good)

Kevin has a credit limit of $5080 and his income is $150,000. Use Balance_model to predict Kevin’s balance (remember that Income is recorded in thousands of dollars).

newx <- data.frame(Limit = _____, Income = ____)

predicted_balance <- predict(Balance_model, newx)
predicted_balance
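One way to fill in the blanks (note that since Income is recorded in thousands of dollars, an income of $150,000 corresponds to Income = 150):

```r
library(ISLR)

Balance_model <- lm(Balance ~ Limit + Income, data = Credit)

# Kevin: credit limit of $5080, income of $150,000
newx <- data.frame(Limit = 5080, Income = 150)

predicted_balance <- predict(Balance_model, newx)
predicted_balance
```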

Exercise

Fit a new linear regression where Rating and Age are the new numerical explanatory variables \(x_1\) and \(x_2\).

2.3 One numerical & one categorical explanatory variable

Let’s revisit the instructor evaluation data introduced in Ch 6.

Consider a modeling scenario with:

  1. A numerical outcome variable \(y\). As before, instructor evaluation score.

  2. Two explanatory variables:

    • A numerical explanatory variable \(x_1\): in this case, their age.

    • A categorical explanatory variable \(x_2\): in this case, their binary gender.

2.3.1 Exploratory data analysis

Let’s reload the evals data and select() only the needed subset of variables. Note that these are different from the variables chosen in Chapter 5. Let’s give this the name evals_ch6.

  1. Let’s look at the raw data values using both the View() and glimpse() functions.

  2. Let’s look at some summary statistics using the skim() function from the skimr package:
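A sketch of steps 1 and 2, assuming the evals data comes from the moderndive package; the exact set of selected variables may differ from what your course uses:

```r
library(moderndive)  # provides the evals data
library(dplyr)
library(skimr)

# Select only the variables needed for this scenario
evals_ch6 <- evals %>%
  select(ID, score, age, gender)

# 1. Raw data values
glimpse(evals_ch6)

# 2. Summary statistics
evals_ch6 %>%
  select(score, age) %>%
  skim()
```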

Describe the output (homework).

  1. Let’s compute the correlation between the two numerical variables we have, score and age. Recall that correlation coefficients are only defined between numerical variables.
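One way to compute this correlation (a sketch, assuming the evals data from the moderndive package):

```r
library(moderndive)
library(dplyr)

# Correlation between teaching score and age
evals %>%
  summarize(correlation = cor(score, age))
```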

We observe that they are _______ and __________ correlated.

Now, let’s try to visualize the correlation.

Create a scatterplot of score over age. Use the binary gender variable to color the points with two colors. Add a regression line (or two?) to your scatterplot.

Say a couple of interesting things about the graph you’ve created.
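A sketch of such a plot, assuming the evals data from the moderndive package; because gender is mapped to color, geom_smooth() fits one line per gender:

```r
library(moderndive)
library(ggplot2)

ggplot(evals, aes(x = age, y = score, color = gender)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) +
  labs(x = "Age", y = "Teaching score", color = "Gender")
```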

2.3.2 Multiple regression: Parallel slopes model

Much like we started to consider multiple explanatory variables using the + sign in the previous section, let’s fit a regression model and get the regression table.

score_model_2 <- lm(score ~ age + gender, data = evals_ch6)
summary(score_model_2)

Call:
lm(formula = score ~ age + gender, data = evals_ch6)

Residuals:
     Min       1Q   Median       3Q      Max 
-1.82833 -0.33494  0.09391  0.42882  0.91506 

Coefficients:
             Estimate Std. Error t value             Pr(>|t|)    
(Intercept)  4.484116   0.125284  35.792 < 0.0000000000000002 ***
age         -0.008678   0.002646  -3.280             0.001117 ** 
gendermale   0.190571   0.052469   3.632             0.000313 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.5343 on 460 degrees of freedom
Multiple R-squared:  0.03901,   Adjusted R-squared:  0.03484 
F-statistic: 9.338 on 2 and 460 DF,  p-value: 0.0001059
get_regression_table(score_model_2)
# A tibble: 3 x 7
  term       estimate std_error statistic p_value lower_ci upper_ci
  <chr>         <dbl>     <dbl>     <dbl>   <dbl>    <dbl>    <dbl>
1 intercept     4.48      0.125     35.8    0        4.24     4.73 
2 age          -0.009     0.003     -3.28   0.001   -0.014   -0.003
3 gendermale    0.191     0.052      3.63   0        0.087    0.294

The modeling equation for this scenario is: (Write the model you obtained)

Write the model for male:

Write the model for female:

Let’s create the scatterplot of score over age AGAIN. Use the binary gender variable to color the points with two colors, and add regression lines to your scatterplot, BUT this time use the model(s) we created.
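A sketch using geom_parallel_slopes() from the moderndive package (assuming a version that provides it), which draws the fitted lines from the parallel slopes model rather than separate per-group fits:

```r
library(moderndive)
library(ggplot2)

ggplot(evals, aes(x = age, y = score, color = gender)) +
  geom_point() +
  geom_parallel_slopes(se = FALSE) +
  labs(x = "Age", y = "Teaching score", color = "Gender")
```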

Interpretations of the coefficients:

  • \(b_{male} = 0.1906\) is the average difference in teaching score that men get relative to the baseline of women.

  • Accordingly, the intercepts (which in this case make no sense since no instructor can have an age of 0) are :

    for women: \(b_0= 4.484\)

    for men: \(b_0 +b_{male} = 4.484 + 0.191 = 4.675\)

  • Both men and women have the same slope. In other words, in this model the associated effect of age is the same for men and women: for every increase of one year in age, there is on average an associated decrease of 0.009 in teaching score (\(b_{age} = -0.009\)).

Warning! ⚠

But wait! Why are the lines in the original scatterplot different from the lines in this one? What is going on?

  • What we have in the original plot is known as an interaction effect between age and gender.
  • Focusing on fitting a model for each of men and women, we see that the resulting regression lines are different.
  • Thus, the associated effect of age appears to differ between men and women; that is, age and gender interact.

Once you notice an interaction effect in the original scatterplot, you should fit a model with an interaction term… (next section)

2.3.3 Multiple regression: Interaction model

We say a model has an interaction effect if the associated effect of one variable depends on the value of another variable. These models can be tricky to interpret at first glance because of their complexity. In this case, the effect of age will depend on the value of gender (as was suggested by the different slopes for men and women in our visual exploratory data analysis).

Let’s fit a regression with an interaction term. We add an interaction term using the * sign; in R, age * gender is shorthand for age + gender + age:gender. Let’s fit this regression and save it as score_model_interaction, then get the regression table using the get_regression_table() function as before.

score_model_interaction <- lm(score ~ age * gender, data = evals_ch6)
get_regression_table(score_model_interaction)
# A tibble: 4 x 7
  term           estimate std_error statistic p_value lower_ci upper_ci
  <chr>             <dbl>     <dbl>     <dbl>   <dbl>    <dbl>    <dbl>
1 intercept         4.88      0.205     23.8    0        4.48     5.29 
2 age              -0.018     0.004     -3.92   0       -0.026   -0.009
3 gendermale       -0.446     0.265     -1.68   0.094   -0.968    0.076
4 age:gendermale    0.014     0.006      2.45   0.015    0.003    0.024

The modeling equation for this scenario is (Writing the equation):

\[\hat{y} = b_0 + b_1 \cdot x_1 + b_2 \cdot x_2 + b_3 \cdot x_1 \cdot x_2\]

\[\hat{score} = 4.883 - 0.018 \cdot age - 0.446 \cdot 1_{Male}(x) + 0.014 \cdot age \cdot 1_{Male}(x) \]

Write the model for male:

\[\hat{score} = 4.883 - 0.018 \cdot age - 0.446 \cdot 1 + 0.014 \cdot age \cdot 1 \] \[\hat{score} = 4.437 - 0.004 \cdot age \]

Write the model for female:

\[\hat{score} = 4.883 - 0.018 \cdot age - 0.446 \cdot 0 + 0.014 \cdot age \cdot 0 \] \[\hat{score} = 4.883 - 0.018 \cdot age \]

We see that while male instructors have a lower intercept, as they age they have a less steep associated average decrease in teaching scores: -0.004 teaching score units per year, as opposed to -0.018 for women. This is consistent with the different slopes and intercepts of the red and blue regression lines fit in the original scatterplot.

2.3.4 Observed/fitted values and residuals